Keynotes

We are pleased to announce the following keynote speakers for CVMP 2025:

Peter Hedman, Google DeepMind
Title to be announced

Abstract and talk details will be announced soon.

Peter Hedman is a staff research scientist at Google DeepMind, where he works on problems at the intersection of computer graphics and vision, with a focus on building immersive 3D experiences. He received his PhD from the Department of Computer Science at UCL in 2019 and his MSc from the University of Helsinki in 2015. Peter received the prize for the most distinguished master’s thesis from the Finnish Academic Association for Mathematics and Natural Sciences, the 2016 Rabin Ezra scholarship for doctoral students in computer graphics, imaging and vision, an ICCV Best Paper Honorable Mention Award in 2021, a Best Student Paper Honorable Mention at CVPR 2022, and a Best Paper Honorable Mention at SIGGRAPH 2024, and was elected a Eurographics Junior Fellow in 2025.

Christian Richardt, Meta Reality Labs
Title to be announced

Abstract and talk details will be announced soon.

Christian Richardt is a Research Scientist at Meta Reality Labs in Zurich, Switzerland, having previously worked at the Codec Avatars Lab in Pittsburgh, USA. Until 2022, he was a Reader (equivalent to Associate Professor) and EPSRC-UKRI Innovation Fellow in the Visual Computing Group and the CAMERA Centre at the University of Bath. His research interests span image processing, computer graphics and computer vision; he combines insights from vision, graphics and perception to reconstruct visual information from images and videos and to create high-quality visual experiences, with a focus on novel-view synthesis. Christian is delighted to return to CVMP after (co-)chairing the conference in 2020 and 2021.

Angela Dai, Technical University of Munich
Title to be announced

Abstract and talk details will be announced soon.

Angela Dai is an Associate Professor at the Technical University of Munich, where she leads the 3D AI Lab. Her research focuses on how the real-world 3D scenes around us can be modeled and semantically understood. She received her PhD in computer science from Stanford in 2018, advised by Pat Hanrahan, and her BSE in computer science from Princeton in 2013. Her research has been recognized with an ECVA Young Researcher Award, an ERC Starting Grant, the Eurographics Young Researcher Award, the German Pattern Recognition Award, a Google Research Scholar Award, and an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention.

Yi-Zhe Song, University of Surrey
Sketch-based Interfaces for Democratising AI-Powered Creative Tools

This keynote examines how sketch-based interfaces democratise AI-powered creative tools, progressing from 2D recognition to immersive 3D generation. Drawing on our decade-long research journey, I demonstrate why sketching is an essential human-AI interface through its unique balance of simplicity and expressive power. Beginning with Sketch-a-Net, the first network to surpass human performance in sketch recognition, I establish the foundational principles that underpin all sketch-based AI systems. These insights drove practical applications in fine-grained sketch-based image retrieval, proving that simple drawings can unlock complex visual searches more intuitively than text. The talk then explores how 2D sketches enable 3D capabilities: I first show that tablet sketches can effectively retrieve and generate complex 3D models, bridging dimensional barriers without requiring specialised expertise. Moving to the frontier of VR sketching, I present our advances in 3D sketch representation learning, which reveal how spatial strokes encode geometric information differently from 2D drawings. The talk concludes with our latest frameworks for high-resolution generation on consumer hardware, which will, in time, serve our vision for accessible creative AI: sketch-based interfaces that make advanced capabilities available to all users regardless of technical expertise.

Yi-Zhe Song is Professor of Computer Vision and Machine Learning at the Centre for Vision, Speech and Signal Processing (CVSSP) and co-director of the Surrey People-Centred AI Institute. As founder and leader of the SketchX Lab (est. 2012), he has driven groundbreaking research in sketch understanding, including the first deep neural network to surpass human performance in sketch recognition (BMVC 2015 Best Paper Award). His work spans fine-grained sketch-based image retrieval, domain generalisation, and bridging sketch with mainstream computer vision, with recent contributions to sketch-based object recognition earning a Best Paper nomination at CVPR 2023. He serves as an Associate Editor for IEEE TPAMI and IJCV and has been an Area Chair for ECCV, CVPR, and ICCV. Prof. Song established and directs Surrey’s MSc in AI programme, following a similar initiative he created at Queen Mary University of London.